feat: RECALL seek cache, default crypto & join() fix (v10.4.1) #22
Conversation
Cache materialized WarpStateV5 at each visited ceiling tick as
content-addressed blobs via @git-stunts/git-cas for near-instant
restoration during seek exploration.
- SeekCachePort abstract interface (get/set/has/keys/delete/clear)
- CasSeekCacheAdapter with LRU eviction, self-healing, optimistic CAS
- seekCacheKey deterministic key builder (v1:t<ceiling>-<sha256hex>)
- WarpGraph.open({ seekCache }) integration
- CLI: --clear-cache, --no-persistent-cache on git warp seek
- Provenance degradation guardrails (E_PROVENANCE_DEGRADED)
- 16 unit tests, ROADMAP milestone RECALL closed (6/6 tasks)
Important: Review skipped — too many files. This PR contains 265 files, which is 115 over the limit of 150.
📝 Walkthrough

Adds a persistent seek-materialization cache (`SeekCachePort` + `CasSeekCacheAdapter`), integrates cache wiring in WarpGraph and the CLI (`--clear-cache`, `--no-persistent-cache`), introduces a domain-local `defaultCrypto` fallback, makes checksum/hash computation always produce values (removing null-guard paths), and updates docs, tests, and the package version.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client
    participant WarpGraph
    participant SeekCachePort
    participant CasAdapter as CasSeekCacheAdapter
    participant CAS as Git-CAS
    Client->>WarpGraph: materialize({ ceiling: N })
    WarpGraph->>WarpGraph: buildSeekCacheKey(ceiling, frontier)
    WarpGraph->>SeekCachePort: get(key)
    alt Cache Hit
        SeekCachePort->>CasAdapter: get(key)
        CasAdapter->>CAS: fetch blob/tree
        CAS-->>CasAdapter: serialized buffer
        CasAdapter-->>WarpGraph: buffer
        WarpGraph->>WarpGraph: set _provenanceDegraded = true
        WarpGraph-->>Client: materialized state (provenance degraded)
    else Cache Miss
        SeekCachePort-->>WarpGraph: null
        WarpGraph->>WarpGraph: materialize from patches
        WarpGraph->>SeekCachePort: set(key, serialized buffer)
        SeekCachePort->>CasAdapter: set(key, buffer)
        CasAdapter->>CAS: store buffer as asset -> tree OID
        CasAdapter-->>SeekCachePort: ack
        WarpGraph-->>Client: materialized state (fresh)
    end
```

```mermaid
sequenceDiagram
    participant User
    participant CLI as bin/warp-graph.js
    participant Graph as WarpGraph
    participant Adapter as CasSeekCacheAdapter
    User->>CLI: git warp seek --clear-cache
    CLI->>Adapter: instantiate (unless --no-persistent-cache)
    CLI->>Adapter: clear()
    Adapter->>Adapter: remove index ref (leave CAS objects)
    Adapter-->>CLI: cleared
    CLI-->>User: Cache cleared
    User->>CLI: git warp seek --no-persistent-cache --tick 3
    CLI->>Graph: open({ seekCache: null })
    Graph->>Graph: materialize bypassing cache
    Graph-->>CLI: fresh provenance/state
    CLI-->>User: output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning).
Release Preflight
If you tag this commit as |
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Around line 8-25: Verify that the changelog header for [10.4.0] — 2026-02-09
corresponds to the actual release/tag date for v10.4.0 by checking the
repository release/tag metadata (the git tag v10.4.0 and the GitHub/GitLab
release created_at), and update the date in CHANGELOG.md if the tag/release date
differs; reference the version string "[10.4.0]" and the v10.4.0 tag when making
the correction so the entry stays authoritative.
In `@docs/GUIDE.md`:
- Around line 941-951: The docs text about materialization cache states "Loose
blobs naturally GC after ~2 weeks" as if fixed; update the wording to clarify
that the ~2 weeks is the default prune expiry and is configurable. Modify the
paragraph mentioning `@git-stunts/git-cas` and the sentence about GC timing to say
something like "Loose blobs are removed by GC using the system's prune expiry
(default ~2 weeks) and can be configured" and keep the existing notes about
pinning to a vault; ensure references to commands and provenance behaviors (git
warp seek --clear-cache, --no-persistent-cache, and functions patchesFor /
materializeSlice) remain unchanged.
In `@package.json`:
- Line 3: Update the Node.js engine constraint in package.json to reflect the
new dependency requirement: change the "engines.node" value to ">=22.0.0" (or
alternatively pick a `@git-stunts/git-cas` version compatible with Node 20/21),
ensuring the engine field aligns with `@git-stunts/git-cas`@^3.0.0; look for the
"engines.node" entry in package.json and update it consistently across any other
references (e.g., release notes or CI configs) so the declared runtime matches
the dependency requirement.
In `@src/infrastructure/adapters/CasSeekCacheAdapter.js`:
- Around line 65-70: The cached promise in _getCas() can permanently store a
rejected promise if _initCas() fails; update _getCas() so that when calling or
awaiting this._casPromise (which comes from _initCas()) you handle rejection by
clearing this._casPromise (set it to null or undefined) before rethrowing the
error so subsequent calls will retry initialization; reference the _getCas()
method, this._casPromise field and the _initCas() method when making this
change.
- Around line 113-141: _mutateIndex currently assumes _persistence.updateRef
does CAS semantics but updateRef performs unconditional overwrite, so concurrent
set/delete can silently lose updates and the retry loop never triggers; fix by
changing the write path to use compare-and-swap semantics: have _readIndex
return the current OID along with the index (or add a new _readIndexWithOid),
pass that expectedOid into _writeIndex/_persistence.updateRef so updateRef can
compare expectedOid vs current and throw a specific CASMismatchError on
mismatch, then in _mutateIndex catch only CASMismatchError to retry and rethrow
other errors (preserving the original error); alternatively, if you cannot
change updateRef, document that CasSeekCacheAdapter is single-writer only and
make _mutateIndex surface the original error instead of swallowing it (preserve
and include the caught error when rethrowing).
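The compare-and-swap retry shape described in the `_mutateIndex` item above can be sketched generically. Everything here (`readIndexWithOid`, the `updateRef` signature, `CASMismatchError`) is illustrative, not the persistence port's actual API:

```javascript
// Optimistic-CAS index mutation: read the index plus its current OID,
// apply the mutation, and write back only if the OID is unchanged.
// Retry solely on CAS mismatch; rethrow (and preserve) all other errors.
class CASMismatchError extends Error {}

async function mutateIndex(store, mutate, maxRetries = 3) {
  let lastErr;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const { index, oid } = await store.readIndexWithOid();
    const next = mutate(index);
    try {
      await store.updateRef(next, oid); // throws CASMismatchError on conflict
      return next;
    } catch (err) {
      if (!(err instanceof CASMismatchError)) throw err; // preserve real errors
      lastErr = err; // stale read: loop and re-read the index
    }
  }
  throw new Error(`mutateIndex: retries exhausted (${lastErr?.message})`);
}
```

With an unconditional-overwrite `updateRef`, the catch branch never fires, which is exactly why the review calls the current retry loop dead code.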
🧹 Nitpick comments (5)

bin/warp-graph.js (1)
25-25: Prefer a public API over mutating `graph._seekCache` directly.
Relying on a private field risks breakage if internals change; consider wiring via `WarpGraph.open({ seekCache })` or a dedicated setter. Also applies to: 1946-1955
test/unit/domain/seekCache.test.js (1)
196-212: Tests are coupled to private fields `_cachedState`, `_cachedCeiling`, `_cachedFrontier`.
Nullifying internal cache fields (lines 201–203, repeated in several tests) to force the persistent-cache path is fragile — any rename or structural change to WarpGraph internals will break these tests silently. Consider exposing a `clearInMemoryCache()` or `_resetCeilingCache()` test helper on WarpGraph to decouple tests from field-level internals.

src/infrastructure/adapters/CasSeekCacheAdapter.js (2)
50-58: `maxEntries` of `0` silently defaults to 200.
`maxEntries || DEFAULT_MAX_ENTRIES` treats `0` as falsy. While `0` entries would be impractical, a caller explicitly passing `0` intending "disabled" would get 200 entries instead. Use nullish coalescing for clarity.

Proposed fix

```diff
- this._maxEntries = maxEntries || DEFAULT_MAX_ENTRIES;
+ this._maxEntries = maxEntries ?? DEFAULT_MAX_ENTRIES;
```
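The distinction the diff above relies on, shown in isolation:

```javascript
const DEFAULT_MAX_ENTRIES = 200;

// || falls back on ANY falsy value (0, '', false, NaN);
// ?? falls back only on null or undefined.
const withOr = 0 || DEFAULT_MAX_ENTRIES;        // 200 (the explicit 0 is lost)
const withNullish = 0 ?? DEFAULT_MAX_ENTRIES;   // 0 (the explicit 0 survives)
const unset = undefined ?? DEFAULT_MAX_ENTRIES; // 200
```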
261-268: `clear()` only deletes the ref — orphaned CAS trees/blobs remain.
This is consistent with the module doc ("blobs are loose Git objects — `git gc` prunes them naturally"), but callers may expect `clear()` to reclaim space immediately. Worth a brief JSDoc note on `clear()` that CAS blobs are left for `git gc`.

src/domain/WarpGraph.js (1)
865-874: Misleading comment: "fire-and-forget" but the `set()` is awaited.
The `await` on line 870 blocks return until the cache write completes (or fails). The comment at line 865 says "fire-and-forget", which typically implies not awaiting. Since failure is caught and swallowed, the behavior is correct but the term is misleading. Either drop the `await` to make it truly fire-and-forget, or fix the comment.
Additionally, the `cacheKey` is computed identically on line 811 and line 867. Hoist it above the persistent-cache-check block to avoid the redundant SHA-256 computation on the cache-miss-then-store path.

♻️ Proposed fix: hoist cacheKey and fix comment

```diff
  // Persistent cache check — skip when collectReceipts is requested
+ let cacheKey;
  if (this._seekCache && !collectReceipts) {
-   const cacheKey = buildSeekCacheKey(ceiling, frontier);
+   cacheKey = buildSeekCacheKey(ceiling, frontier);
    try {
      const cached = await this._seekCache.get(cacheKey);

- // Store to persistent cache (fire-and-forget for non-receipt paths)
+ // Store to persistent cache (failure is non-fatal)
  if (this._seekCache && !collectReceipts && allPatches.length > 0) {
-   const cacheKey = buildSeekCacheKey(ceiling, frontier);
    try {
+     if (!cacheKey) cacheKey = buildSeekCacheKey(ceiling, frontier);
      const buf = serializeFullStateV5(state, { codec: this._codec });
      await this._seekCache.set(cacheKey, buf);
```
```markdown
## [10.4.0] — 2026-02-09 — RECALL: Seek Materialization Cache

Caches materialized `WarpStateV5` at each visited ceiling tick as content-addressed blobs via `@git-stunts/git-cas`, enabling near-instant restoration for previously-visited ticks during seek exploration. Blobs are loose Git objects that naturally GC unless pinned to a vault.

### Added

- **`SeekCachePort`** (`src/ports/SeekCachePort.js`): Abstract port for seek materialization cache with `get`, `set`, `has`, `keys`, `delete`, `clear` methods.
- **`CasSeekCacheAdapter`** (`src/infrastructure/adapters/CasSeekCacheAdapter.js`): Git-CAS backed adapter with rich index metadata (treeOid, createdAt, ceiling, frontierHash, sizeBytes, codec, schemaVersion), LRU eviction (default max 200 entries), self-healing on read miss (removes dead entries when blobs are GC'd), and optimistic retry loop for concurrent index updates.
- **`seekCacheKey`** (`src/domain/utils/seekCacheKey.js`): Deterministic cache key builder producing `v1:t<ceiling>-<sha256hex>` keys. Uses SHA-256 via `node:crypto` with no fallback.
- **`buildSeekCacheRef`** in `RefLayout.js`: Builds `refs/warp/<graph>/seek-cache` ref path for the cache index.
- **`WarpGraph.open({ seekCache })`**: Optional `SeekCachePort` for persistent seek cache injection. Cache is checked after in-memory miss and stored after full materialization in `_materializeWithCeiling`.
- **`--clear-cache` flag** on `git warp seek`: Purges the persistent seek cache.
- **`--no-persistent-cache` flag** on `git warp seek`: Bypasses persistent cache for a single invocation (useful for full provenance access or performance testing).
- **Provenance degradation guardrails**: `_provenanceDegraded` flag on WarpGraph, set on persistent cache hit. `patchesFor()` and `materializeSlice()` throw `E_PROVENANCE_DEGRADED` with clear instructions to re-seek with `--no-persistent-cache`.
- **`SeekCachePort` export** from main entry point (`index.js`) and TypeScript definitions (`index.d.ts`).
- **Unit tests** (`test/unit/domain/seekCache.test.js`, 16 tests): Cache key determinism, WarpGraph integration with mock cache (hit/miss/error/degradation), provenance guardrails.
- **ROADMAP milestone RECALL** (v10.4.0): 6 tasks, all closed.
```
Confirm the 2026-02-09 release date.
Please verify the date matches the intended v10.4.0 tag/release so the changelog stays authoritative.
- CasSeekCacheAdapter: fix rejected promise cached permanently in `_getCas()` — clear on failure so subsequent calls retry init
- CasSeekCacheAdapter: use `??` instead of `||` for maxEntries (0 no longer silently defaults to 200)
- CasSeekCacheAdapter: document single-writer constraint in `_mutateIndex`, preserve original error message on retry exhaustion
- CasSeekCacheAdapter: add JSDoc to `clear()` noting CAS blobs are left for `git gc`
- CasSeekCacheAdapter: document Node >= 22.0.0 requirement (inherited from @git-stunts/git-cas)
- WarpGraph: hoist cacheKey above persistent-cache block to avoid redundant SHA-256 on cache-miss-then-store path
- WarpGraph: fix misleading "fire-and-forget" comment (we await)
- WarpGraph: add public `seekCache` getter and `setSeekCache()` method
- CLI: use `graph.setSeekCache()` / `graph.seekCache` instead of poking `graph._seekCache` directly
- GUIDE.md: clarify GC timing as configurable (~2 weeks default)
- CHANGELOG: document Node >= 22 requirement for CasSeekCacheAdapter
- index.d.ts: add `seekCache` getter and `setSeekCache()` to types
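The first fix listed (not caching a rejected init promise) follows a general pattern, sketched here with hypothetical names; only the shape mirrors the adapter's `_casPromise` / `_initCas` fields:

```javascript
// Lazy async init that caches the in-flight promise but drops it on
// rejection, so a transient failure doesn't poison every later call.
class LazyInit {
  constructor(init) {
    this._init = init;    // async factory; may fail transiently
    this._promise = null; // cached promise (pending or fulfilled)
  }

  async get() {
    if (!this._promise) this._promise = this._init();
    try {
      return await this._promise;
    } catch (err) {
      this._promise = null; // clear so the next call retries init
      throw err;
    }
  }
}
```

Without the `catch` that nulls out the cached promise, the first failure would be replayed to every caller forever.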
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
bin/warp-graph.js (1)
1761-1872: ⚠️ Potential issue | 🟡 Minor

Disallow `--clear-cache` with `--no-persistent-cache`.
Passing both currently disables cache wiring, so clear becomes a no-op. Rejecting the combination avoids surprising behavior.

🛡️ Suggested guardrail

```diff
  if (arg === '--clear-cache') {
    if (spec.action !== 'status') {
      throw usageError('--clear-cache cannot be combined with other seek flags');
    }
+   if (spec.noPersistentCache) {
+     throw usageError('--clear-cache cannot be combined with --no-persistent-cache');
+   }
    spec.action = 'clear-cache';
  } else if (arg === '--no-persistent-cache') {
+   if (spec.action === 'clear-cache') {
+     throw usageError('--no-persistent-cache cannot be combined with --clear-cache');
+   }
    spec.noPersistentCache = true;
  }
```
🤖 Fix all issues with AI agents
In `@index.d.ts`:
- Around line 1490-1494: The TypeScript declaration for setSeekCache restricts
its parameter to SeekCachePort but the getter seekCache and runtime allow null;
update the setSeekCache signature to accept SeekCachePort | null so callers can
detach the cache (referencing the seekCache getter and SeekCachePort type and
the setSeekCache method to locate the change).
In `@src/domain/WarpGraph.js`:
- Around line 829-848: When deserializeFullStateV5(cached, { codec: this._codec
}) throws while reading an entry from this._seekCache (inside the cache-hit
branch after buildSeekCacheKey), delete the offending cache entry so corrupted
payloads don’t repeatedly fail future seeks; specifically, in the catch block
around the cache read/deserialization, call this._seekCache.delete(cacheKey) (or
equivalent) and log the removal, then continue to fall through to full
materialization and _setMaterializedState as before.
In `@src/infrastructure/adapters/CasSeekCacheAdapter.js`:
- Around line 161-177: Enforcement currently evicts by insertion time in
_enforceMaxEntries using index.entries[*].createdAt, which implements FIFO not
LRU; change eviction to use a last-access timestamp (e.g.,
index.entries[*].lastAccessed) by updating that timestamp on cache hits in the
cache read methods (e.g., the methods that return or touch entries) and then
sort in _enforceMaxEntries by lastAccessed (falling back to createdAt if
lastAccessed is missing) so the least-recently-used items are evicted instead of
the oldest inserted ones.
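The eviction-ordering change described above can be sketched as a pure function over an illustrative entry shape; the `lastAccessedAt` field name mirrors the comment but is an assumption about the index format:

```javascript
// Pick which cache keys to evict: sort by last access, falling back to
// creation time for entries written before the field existed, and drop
// the least-recently-used overflow.
function pickEvictions(entries, maxEntries) {
  const keys = Object.keys(entries);
  if (keys.length <= maxEntries) return [];
  return keys
    .sort((a, b) => {
      const ta = entries[a].lastAccessedAt ?? entries[a].createdAt;
      const tb = entries[b].lastAccessedAt ?? entries[b].createdAt;
      return ta - tb; // oldest effective access first
    })
    .slice(0, keys.length - maxEntries);
}

// An entry created early but touched recently survives; one that was
// never touched falls back to its creation time and is evicted first.
const evicted = pickEvictions({
  a: { createdAt: 1, lastAccessedAt: 9 },
  b: { createdAt: 2 },
  c: { createdAt: 3, lastAccessedAt: 4 },
}, 2);
```

Sorting by `createdAt` alone (the current behavior) would instead evict `a`, the most frequently used entry, which is the FIFO-vs-LRU bug the review flags.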
🧹 Nitpick comments (1)
src/infrastructure/adapters/CasSeekCacheAdapter.js (1)
185-192: Consider guarding against malformed cache keys.
A malformed key yields a `NaN` ceiling and an empty hash; a small validation makes failures explicit.

🔧 Suggested guard

```diff
  const dashIdx = rest.indexOf('-');
- const ceiling = parseInt(rest.slice(1, dashIdx), 10);
+ if (!rest.startsWith('t') || dashIdx <= 1) {
+   throw new Error(`Invalid seek cache key: ${key}`);
+ }
+ const ceiling = Number.parseInt(rest.slice(1, dashIdx), 10);
+ if (!Number.isInteger(ceiling)) {
+   throw new Error(`Invalid seek cache key: ${key}`);
+ }
  const frontierHash = rest.slice(dashIdx + 1);
+ if (!frontierHash) {
+   throw new Error(`Invalid seek cache key: ${key}`);
+ }
```
```js
// Persistent cache check — skip when collectReceipts is requested
let cacheKey;
if (this._seekCache && !collectReceipts) {
  cacheKey = buildSeekCacheKey(ceiling, frontier);
  try {
    const cached = await this._seekCache.get(cacheKey);
    if (cached) {
      const state = deserializeFullStateV5(cached, { codec: this._codec });
      this._provenanceIndex = new ProvenanceIndex();
      this._provenanceDegraded = true;
      await this._setMaterializedState(state);
      this._cachedCeiling = ceiling;
      this._cachedFrontier = frontier;
      this._logTiming('materialize', t0, { metrics: `cache hit (ceiling=${ceiling})` });
      return state;
    }
  } catch {
    // Cache read failed — fall through to full materialization
  }
}
```
Self-heal corrupted cache entries on deserialize failure.
If deserialization fails (corrupt or incompatible payload), the bad entry remains and will fail on every seek. Dropping it on decode failure avoids repeated fallbacks.
🧹 Suggested self-heal on decode failure

```diff
- try {
-   const cached = await this._seekCache.get(cacheKey);
-   if (cached) {
-     const state = deserializeFullStateV5(cached, { codec: this._codec });
-     this._provenanceIndex = new ProvenanceIndex();
-     this._provenanceDegraded = true;
-     await this._setMaterializedState(state);
-     this._cachedCeiling = ceiling;
-     this._cachedFrontier = frontier;
-     this._logTiming('materialize', t0, { metrics: `cache hit (ceiling=${ceiling})` });
-     return state;
-   }
- } catch {
-   // Cache read failed — fall through to full materialization
- }
+ try {
+   const cached = await this._seekCache.get(cacheKey);
+   if (cached) {
+     try {
+       const state = deserializeFullStateV5(cached, { codec: this._codec });
+       this._provenanceIndex = new ProvenanceIndex();
+       this._provenanceDegraded = true;
+       await this._setMaterializedState(state);
+       this._cachedCeiling = ceiling;
+       this._cachedFrontier = frontier;
+       this._logTiming('materialize', t0, { metrics: `cache hit (ceiling=${ceiling})` });
+       return state;
+     } catch {
+       await this._seekCache.delete(cacheKey);
+     }
+   }
+ } catch {
+   // Cache read failed — fall through to full materialization
+ }
```
…10.4.1)
- Add defaultCrypto.js completing BULKHEAD port injection defaults
- Fix WarpGraph.join() — `.elements.size` → `orsetElements().length` (was always throwing TypeError)
- Remove crypto null guards in bitmap builders and StateSerializerV5
- Add 18 new test files covering ports, adapters, utils, and visualization gaps
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@test/unit/infrastructure/adapters/NodeHttpAdapter.error.test.js`:
- Around line 79-96: The test in NodeHttpAdapter.error.test.js that sends
bigBody and awaits fetch can race with the server closing the socket, so update
the it('returns 413 when request body exceeds 10MB limit' ...) test to accept
either a successful 413 response or a connection-reset error: perform the fetch
inside a try/catch around the fetch to capture errors from fetch(..., { body:
bigBody }), if you get a Response (res) continue asserting res.status === 413
and res.text() === 'Payload Too Large'; if an error is thrown, assert error.code
=== 'ECONNRESET' (or the platform equivalent), referencing the bigBody, fetch
call, and res variable to locate changes.
🧹 Nitpick comments (5)
test/unit/infrastructure/adapters/GitGraphAdapter.coverage.test.js (1)
115-117: Unused `mockStream` variable.
`mockStream` is declared but never referenced — the `executeStream` mock on lines 120–127 creates its own response objects directly. This dead code can be removed.

🧹 Suggested cleanup

```diff
- const mockStream = {
-   collect: vi.fn(),
- };
-
  // First call returns content for file_a, second for file_b
```

src/domain/services/BitmapIndexBuilder.js (1)
19-22: `computeChecksum` is duplicated across three bitmap service files.
Identical implementations exist in both `BitmapIndexBuilder.js` and `StreamingBitmapIndexBuilder.js` (same pattern: canonical stringify → SHA-256 via crypto). A similar function also appears in `BitmapIndexReader.js` with an additional `version` parameter and conditional stringify logic. Consider extracting the base checksum pattern to a shared utility to keep implementations in sync and reduce duplication.

src/domain/services/StateSerializerV5.js (1)
125-133: JSDoc `crypto` param should be marked optional.
Since `computeStateHashV5` now falls back to `defaultCrypto`, the `crypto` parameter is effectively optional. The JSDoc on line 126 should use bracket syntax to reflect this.

📝 Suggested fix

```diff
- * @param {import('../../ports/CryptoPort.js').default} options.crypto - CryptoPort instance
+ * @param {import('../../ports/CryptoPort.js').default} [options.crypto] - CryptoPort instance (defaults to defaultCrypto)
```

test/unit/domain/WarpGraph.coverageGaps.test.js (1)
280-309: Consider asserting GC result shape more specifically.
The "runs GC when tombstone ratio threshold is exceeded" test (line 280) only checks `result.ran`, `result.result`, and `result.reasons.length`. Since the state has a live node and zero tombstones, it may be worth asserting `result.result.tombstonesRemoved === 0` to confirm the GC ran but found nothing to compact — that would make the test intent clearer and catch regressions in the GC result shape.

src/domain/WarpGraph.js (1)
951-962: Correct semantic fix — but `orsetElements()` is O(n) per call.
Switching from `.size` (which counts raw entries including tombstones) to `orsetElements().length` (which counts only live elements) is the correct fix for accurate before/after counts. However, `orsetElements()` iterates the entire ORSet and filters by containment — it's called 4 times in `join()` (twice for nodes, twice for edges).
For large graphs this could be noticeable. Consider caching the element arrays or computing counts in a single pass if this becomes a hot path.
```js
it('returns 413 when request body exceeds 10MB limit', async () => {
  const logger = { error: vi.fn() };
  const base = await startServer(
    async () => ({ status: 200, body: 'ok' }),
    { logger },
  );

  // Send a body larger than 10MB
  const bigBody = Buffer.alloc(11 * 1024 * 1024, 'x');
  const res = await fetch(`${base}/upload`, {
    method: 'POST',
    body: bigBody,
  });

  expect(res.status).toBe(413);
  const text = await res.text();
  expect(text).toBe('Payload Too Large');
});
```
CI-breaking: fetch gets ECONNRESET before reading the 413 response.
The server destroys the socket once the incoming body exceeds the 10MB cap. Because fetch is still writing the 11 MB payload, the TCP reset arrives before the 413 response is readable — hence the pipeline failure.
Accept either outcome: a clean 413 or a connection-reset error.
🐛 Proposed fix

```diff
  it('returns 413 when request body exceeds 10MB limit', async () => {
    const logger = { error: vi.fn() };
    const base = await startServer(
      async () => ({ status: 200, body: 'ok' }),
      { logger },
    );

    // Send a body larger than 10MB
    const bigBody = Buffer.alloc(11 * 1024 * 1024, 'x');
-   const res = await fetch(`${base}/upload`, {
-     method: 'POST',
-     body: bigBody,
-   });
-
-   expect(res.status).toBe(413);
-   const text = await res.text();
-   expect(text).toBe('Payload Too Large');
+   try {
+     const res = await fetch(`${base}/upload`, {
+       method: 'POST',
+       body: bigBody,
+     });
+
+     // If the response arrives before the socket is torn down, assert 413
+     expect(res.status).toBe(413);
+     const text = await res.text();
+     expect(text).toBe('Payload Too Large');
+   } catch (err) {
+     // On some platforms / timing, the server resets the connection
+     // before fetch can read the response.
+     expect(err.cause?.code ?? err.code).toBe('ECONNRESET');
+   }
  });
```
🧰 Tools
🪛 GitHub Actions: Release Preflight (PR)
[error] 88-90: TypeError: fetch failed. NodeHttpAdapter error paths > returns 413 when request body exceeds 10MB limit. Caused by ECONNRESET during fetch.
🪛 GitHub Check: preflight
[failure] 88-88: test/unit/infrastructure/adapters/NodeHttpAdapter.error.test.js > NodeHttpAdapter error paths > returns 413 when request body exceeds 10MB limit
TypeError: fetch failed
❯ test/unit/infrastructure/adapters/NodeHttpAdapter.error.test.js:88:17
Caused by: Caused by: Error: write ECONNRESET
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
Serialized Error: { errno: -104, code: 'ECONNRESET', syscall: 'write' }
Add blank lines before list blocks and align ASCII box-drawing characters in the layer diagram.
The server destroys the socket when the incoming body exceeds 10 MB. On some platforms fetch receives ECONNRESET before the 413 response is readable. Accept either outcome so CI passes on all runtimes.
git-cas@^3.0.0 requires Node >= 22 but git-warp supports Node >= 20. CasSeekCacheAdapter is only used when explicitly injected and already handles init failure gracefully. Moving git-cas to optionalDependencies ensures npm install succeeds on Node 20 without breaking seek cache on Node 22+.
The getter returns SeekCachePort | null but the setter only accepted SeekCachePort. Update the type definition and add a regression test for setSeekCache(null) to detach the cache.
When deserializeFullStateV5 throws on a cached payload, delete the bad entry via seekCache.delete() before falling through to full materialization. Prevents repeated fallback on every seek to the same tick.
Track lastAccessedAt on cache hits and use it (falling back to createdAt) for eviction ordering. Previously eviction was FIFO by creation time, so frequently accessed entries could be dropped.
Add JSDoc type annotations across all src/, bin/, and scripts/ files to achieve 0 tsc errors under tsconfig.src.json (down from 1,513). Stage A (infrastructure): tsconfig split, ratchet script + baseline, pre-push hook, CI enforcement. Stage B (this commit): JSDoc @param/@returns/@typedef/@type annotations on 98 source files. No runtime behavior changes. Cascade-fixed 1,261 test errors (7,123 → 5,862) from improved source typing. Verified: tsc 0 errors, ESLint clean, 3,116 tests passing.
Add --no-warn-ignored flag so ESLint silently skips files matching the global ignore pattern (scripts/**) instead of emitting warnings.
…echeck
- Build Docker test image once upfront, reuse with --no-build for unit tests, benchmarks, and BATS (eliminates 2 redundant builds)
- Run lint and typecheck ratchet in parallel (both are read-only)
- Call test:local/benchmark:local directly instead of going through the GIT_STUNTS_DOCKER shell wrapper
Add JSDoc type annotations across all test/, integration/, and benchmark/ files to achieve 0 tsc errors under all three tsconfigs (src, test, combined) — down from 5,905 test errors. Key patterns: /** @type {any} */ on mock variables, catch blocks, discriminated union accesses, and vi.fn() results. No test behavior changed — all 3,116 tests pass. Combined with Stage B: 7,461 total errors → 0.
…, v10.4.2)
- Convert all @ts-ignore → @ts-expect-error (7 files)
- Tag ~108 inline @type {*|any} casts with // TODO(ts-cleanup): reason
- Add scripts/ts-policy-check.js enforcing both rules in src/bin/scripts
- Wire typecheck:policy into CI (ci.yml, release-pr.yml) and pre-push hook
- Remove obsolete ts-ratchet.js and ts-error-baseline.json (Stage D cleanup)
- Bump to v10.4.2
On Node 20, @git-stunts/git-cas (optional dep, requires Node >= 22) is not installed, causing TS2307 in CasSeekCacheAdapter's dynamic import. Add types/git-cas.d.ts ambient module declaration and include it in tsconfig.json and tsconfig.src.json so TypeScript resolves the module regardless of installation.
- Bump engines.node to >=22.0.0 (matches @git-stunts/git-cas).
- Move @git-stunts/git-cas from optionalDependencies to dependencies.
- Make seek cache write fire-and-forget + CLI process.exit() — seek commands drop from ~30s to <1s by not blocking on background CAS I/O.
- Remove BATS E2E from pre-push hook (CI-only) to keep pushes fast.
- Update CI matrix, release workflows, and docs for Node 22.
Pre-push now runs lint, typecheck, policy, and unit tests locally. Docker builds, benchmarks, and BATS E2E are CI-only.
Summary

- **RECALL seek cache** via `@git-stunts/git-cas` — caches `WarpStateV5` at each visited ceiling tick for near-instant restoration during seek exploration. Includes `SeekCachePort`, `CasSeekCacheAdapter`, the `--clear-cache` / `--no-persistent-cache` CLI flags, and provenance degradation guardrails.
- **Default crypto**: `defaultCrypto.js` completes the BULKHEAD port injection pattern — all ports now have domain-local defaults. Removes crypto null guards from bitmap builders and `StateSerializerV5`.
- **`WarpGraph.join()` bugfix**: `.elements.size` → `orsetElements().length` — the join happy path was always throwing a TypeError.

Test plan

- Unit tests (`npx vitest run test/unit`)